Learning Multiple Quantiles With Neural Networks
Authors
Abstract
We present a neural network model for estimation of multiple conditional quantiles that satisfies the noncrossing property. Motivated by linear quantile regression, we propose a model with inequality constraints. In particular, to use first-order optimization methods, we develop a new algorithm for fitting the proposed model. It gives a nearly optimal solution without a projected gradient step and requires polynomial computation time. We compare the performance of our model with existing models on simulated and real precipitation data. Supplementary materials for this article are available online.
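To illustrate the two ingredients the abstract mentions, here is a minimal numpy sketch: the pinball (check) loss used to estimate a conditional quantile, and one common way to enforce the noncrossing property by modeling higher quantiles as the lowest quantile plus nonnegative increments. The helper names `pinball_loss` and `noncrossing_quantiles` are illustrative assumptions, not the authors' implementation, and the increment construction is a generic device rather than the paper's constrained formulation.

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    # Pinball (check) loss for quantile level tau in (0, 1):
    # penalizes under-prediction with weight tau and
    # over-prediction with weight (1 - tau).
    diff = y - q_hat
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def noncrossing_quantiles(base, deltas):
    # Build predictions for increasing quantile levels that
    # cannot cross: start from the lowest-level prediction
    # `base` and add cumulative nonnegative increments.
    return base + np.cumsum(np.maximum(deltas, 0.0))
```

For example, raw increments `[-0.5, 2.0]` on a base prediction of `1.0` are clipped and accumulated into the monotone sequence `[1.0, 3.0]`, so the fitted quantile curves cannot cross by construction.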
Similar Articles
Reinforcement Learning in Neural Networks: A Survey
In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. Using neural networks enables RL to search for optimal policies more efficiently in several real-life applicat...
Learning Multiple Timescales in Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are powerful architectures for sequence learning. Recent advances on the vanishing gradient problem have led to improved results and an increased research interest. Among recent proposals are architectural innovations that allow the emergence of multiple timescales during training. This paper explores a number of architectures for sequence generation and predict...
Learning Non-overlapping Convolutional Neural Networks with Multiple Kernels
In this paper, we consider parameter recovery for non-overlapping convolutional neural networks (CNNs) with multiple kernels. We show that when the inputs follow Gaussian distribution and the sample size is sufficiently large, the squared loss of such CNNs is locally strongly convex in a basin of attraction near the global optima for most popular activation functions, like ReLU, Leaky ReLU, Squ...
Journal
Journal title: Journal of Computational and Graphical Statistics
Year: 2021
ISSN: 1061-8600, 1537-2715
DOI: https://doi.org/10.1080/10618600.2021.1909601